
    Towards learning free naive bayes nearest neighbor-based domain adaptation

    As of today, object categorization algorithms are not able to achieve the level of robustness and generality necessary to work reliably in the real world. Even the most powerful convolutional neural network we can train fails to perform satisfactorily when trained and tested on data from different databases. This issue, known as domain adaptation and/or dataset bias in the literature, is due to a distribution mismatch between data collections. Methods addressing it range from max-margin classifiers to learning how to modify the features to obtain a more robust representation. Recent work showed that casting the problem into the image-to-class recognition framework significantly alleviates the domain adaptation problem [23]. Here we follow this approach, and show how a very simple, learning-free Naive Bayes Nearest Neighbor (NBNN)-based domain adaptation algorithm can significantly alleviate the distribution mismatch between source and target data, especially as the number of classes and the number of sources grow. Experiments on standard benchmarks used in the literature show that our approach (a) is competitive with the current state of the art on small-scale problems, and (b) achieves the current state of the art as the number of classes and sources grows, with minimal computational requirements. © Springer International Publishing Switzerland 2015
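    The image-to-class idea behind NBNN can be illustrated with a minimal sketch: instead of comparing whole images, each local descriptor of a query image is matched against its nearest descriptor in each class's pool, and the summed distances decide the label. This is a toy illustration of the general NBNN decision rule, not the paper's adaptation algorithm; the class names and data are made up.

    ```python
    import numpy as np

    def nbnn_classify(query_descriptors, class_descriptors):
        """Image-to-class NBNN: for each class, sum the squared distance from
        every query descriptor to its nearest descriptor in that class, then
        predict the class with the smallest total distance."""
        scores = {}
        for label, descs in class_descriptors.items():
            # pairwise squared Euclidean distances, shape (n_query, n_class_descs)
            d = ((query_descriptors[:, None, :] - descs[None, :, :]) ** 2).sum(-1)
            scores[label] = d.min(axis=1).sum()  # image-to-class distance
        return min(scores, key=scores.get)

    # toy example: two classes whose descriptors cluster around different points
    rng = np.random.default_rng(0)
    classes = {
        "cat": rng.normal(0.0, 0.1, size=(20, 8)),
        "dog": rng.normal(1.0, 0.1, size=(20, 8)),
    }
    query = rng.normal(0.0, 0.1, size=(5, 8))  # drawn near the "cat" cluster
    print(nbnn_classify(query, classes))  # → cat
    ```

    Because no per-class model is trained, the method is "learning free": classification cost is dominated by the nearest-neighbor searches.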

    Judicial lobbying: The politics of labor law constitutional interpretation

    This paper links the theory of interest-group influence over the legislature with that of congressional control over the judiciary. The resulting framework reconciles the theoretical literature on lobbying with the negative available evidence on the impact of lobbying on legislative outcomes, and sheds light on the determinants of lobbying in separation-of-powers systems. We provide conditions under which judicial decisions are sensitive to legislative lobbying, and find that lobbying declines as the legislature becomes more divided on the relevant issues. We apply this framework to analyze Supreme Court labor decisions in Argentina, and find results consistent with the predictions of the theory.

    Are You Tampering With My Data?

    We propose a novel approach to adversarial attacks on neural networks (NN), focusing on tampering with the data used for training instead of generating attacks on trained models. Our network-agnostic method creates a backdoor during training which can be exploited at test time to force a neural network to exhibit abnormal behaviour. We demonstrate on two widely used datasets (CIFAR-10 and SVHN) that a universal modification of just one pixel per image, applied to all the images of a class in the training set, is enough to corrupt the training procedure of several state-of-the-art deep neural networks, causing the networks to misclassify any images to which the modification is applied. Our aim is to bring to the attention of the machine learning community the possibility that even learning-based methods trained personally on public datasets can be subject to attacks by a skillful adversary.
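    The tampering idea can be sketched in a few lines: flip a single, fixed pixel in every training image of one class before training. The pixel position and trigger value below are illustrative assumptions, not the ones used in the paper, and the toy batch merely demonstrates the mechanics.

    ```python
    import numpy as np

    def poison_class(images, labels, target_class, pixel=(0, 0), value=255):
        """Write a one-pixel trigger into every training image of one class.
        `pixel` and `value` are hypothetical choices for illustration only."""
        poisoned = images.copy()
        mask = labels == target_class
        poisoned[mask, pixel[0], pixel[1], :] = value  # one-pixel backdoor trigger
        return poisoned

    # toy CIFAR-10-shaped batch: 4 images of 32x32 RGB, two of class 0
    images = np.zeros((4, 32, 32, 3), dtype=np.uint8)
    labels = np.array([0, 1, 0, 2])
    tampered = poison_class(images, labels, target_class=0)
    print(int((tampered != images).sum()))  # 2 images x 3 channels = 6 changed values
    ```

    At test time, applying the same one-pixel modification to any image would activate the backdoor learned by a network trained on the tampered set.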

    Joint supervised and self-supervised learning for 3D real world challenges

    Point cloud processing and 3D shape understanding are challenging tasks for which deep learning techniques have demonstrated great potential. Still, further progress is essential to allow artificial intelligent agents to interact with the real world. In many practical conditions the amount of annotated data may be limited, and integrating new sources of knowledge becomes crucial to support autonomous learning. Here we consider several scenarios involving synthetic and real-world point clouds where supervised learning fails due to data scarcity and large domain gaps. We propose to enrich standard feature representations by leveraging self-supervision through a multi-task model that solves a 3D puzzle while learning the main task of shape classification or part segmentation. An extensive analysis covering few-shot, transfer learning and cross-domain settings shows the effectiveness of our approach, with state-of-the-art results.
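    The multi-task objective described above can be sketched as a weighted sum of two cross-entropy losses computed on shared features: one for the supervised main task, one for the self-supervised puzzle task. Everything below (head shapes, the 0.5 weight, random features standing in for an encoder) is an illustrative assumption, not the paper's architecture.

    ```python
    import numpy as np

    def cross_entropy(logits, labels):
        """Mean cross-entropy from raw logits (numerically stable log-softmax)."""
        z = logits - logits.max(axis=1, keepdims=True)
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(labels)), labels].mean()

    def joint_loss(features, cls_head, puzzle_head,
                   cls_labels, puzzle_labels, weight=0.5):
        """Supervised main-task loss plus weighted self-supervised puzzle loss,
        both computed on the same shared encoder features."""
        return cross_entropy(features @ cls_head, cls_labels) \
            + weight * cross_entropy(features @ puzzle_head, puzzle_labels)

    rng = np.random.default_rng(1)
    features = rng.normal(size=(8, 16))       # stand-in for shared encoder output
    cls_head = rng.normal(size=(16, 10))      # shape-classification head
    puzzle_head = rng.normal(size=(16, 24))   # puzzle-permutation head
    loss = joint_loss(features, cls_head, puzzle_head,
                      rng.integers(0, 10, 8), rng.integers(0, 24, 8))
    print(loss > 0)  # cross-entropy terms are positive for random logits → True
    ```

    Because the puzzle labels come for free from the data itself, the auxiliary term adds supervision without extra annotation.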

    Positive-unlabeled learning for open set domain adaptation

    Open Set Domain Adaptation (OSDA) focuses on bridging the domain gap between a labeled source domain and an unlabeled target domain, while also rejecting target classes that are not present in the source as unknown. The challenges of this task are closely related to those of Positive-Unlabeled (PU) learning, where it is essential to discriminate between positive (known) and negative (unknown) class samples in the unlabeled target data. With this newly discovered connection, we leverage the theoretical framework of PU learning for OSDA and, at the same time, extend PU learning to tackle uneven data distributions. Our method combines domain adversarial learning with a new non-negative risk estimator for PU learning based on self-supervised sample reconstruction. With experiments on digit recognition and object classification, we validate our risk estimator and demonstrate that our approach reduces the domain gap without suffering from negative transfer.
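    The non-negative risk estimator idea can be sketched in the spirit of standard non-negative PU learning: the negative-class risk is estimated from unlabeled data, the positive contribution is subtracted using the class prior, and the result is clipped at zero so the estimate cannot go negative and drive overfitting. The prior and the loss values below are toy numbers; the paper's estimator additionally uses self-supervised sample reconstruction, which is not reproduced here.

    ```python
    def nn_pu_risk(loss_pos_as_pos, loss_pos_as_neg, loss_unl_as_neg, prior):
        """Non-negative PU risk: prior * R_p^+ + max(0, R_u^- - prior * R_p^-).
        Each argument is the mean surrogate loss on the corresponding set;
        `prior` is P(y = +1) in the unlabeled data."""
        neg_risk = loss_unl_as_neg - prior * loss_pos_as_neg
        return prior * loss_pos_as_pos + max(0.0, neg_risk)

    # toy values: the raw negative-risk estimate (0.1 - 0.5*0.4 = -0.1) would
    # be negative, so the clip keeps only the positive-risk term
    print(nn_pu_risk(0.2, 0.4, 0.1, prior=0.5))  # 0.5*0.2 + max(0, -0.1) = 0.1
    ```

    The clipping is what makes the estimator usable with flexible models: without it, minimizing an unboundedly negative empirical risk leads to severe overfitting on the unlabeled set.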

    Dynamic Adaptation on Non-Stationary Visual Domains

    Domain adaptation aims to learn models on a labeled source domain that perform well on an unlabeled target domain. Prior work has examined domain adaptation in the context of stationary domain shifts, i.e. static datasets. However, with large-scale or dynamic data sources, data from a given domain is not usually available all at once. For instance, in a streaming data scenario, dataset statistics effectively become a function of time. We introduce a framework for adaptation over non-stationary distribution shifts applicable to large-scale and streaming data scenarios. The model is adapted sequentially over incoming unsupervised streaming data batches, enabling improvements over several batches without the need for any additional annotated data. To demonstrate the effectiveness of our proposed framework, we modify associative domain adaptation to work well on source and target data batches with unequal class distributions. We apply our method to several adaptation benchmark datasets for classification and show improved classifier accuracy not only on the currently adapted batch, but also when applied to future stream batches. Furthermore, we show the applicability of our associative learning modifications to semantic segmentation, where we achieve competitive results.
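    The sequential-adaptation loop can be sketched generically: each incoming unlabeled target batch updates the model, so later batches start from a model already adapted to earlier ones. `adapt_step` below is a placeholder for the associative domain-adaptation update described in the abstract; the toy update merely demonstrates the control flow.

    ```python
    def adapt_over_stream(model, source_data, target_stream, adapt_step):
        """Adapt the model sequentially on each incoming unlabeled target
        batch; no target annotations are required at any point."""
        for batch in target_stream:          # batches arrive over time
            model = adapt_step(model, source_data, batch)
        return model

    # toy demonstration: the "model" is just a running count of adapted samples
    stream = [[1, 2], [3, 4, 5], [6]]
    final = adapt_over_stream(0, source_data=None, target_stream=stream,
                              adapt_step=lambda m, s, b: m + len(b))
    print(final)  # → 6
    ```

    The key property the abstract claims is that this sequential structure helps not only on the current batch but also on future ones, since adaptation state carries forward.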

    'Part'ly first among equals: Semantic part-based benchmarking for state-of-the-art object recognition systems

    An examination of object recognition challenge leaderboards (ILSVRC, PASCAL-VOC) reveals that the top-performing classifiers typically exhibit small differences amongst themselves in terms of error rate/mAP. To better differentiate the top performers, additional criteria are required. Moreover, the (test) images on which the performance scores are based predominantly contain fully visible objects. Therefore, 'harder' test images, mimicking the challenging conditions (e.g. occlusion) in which humans routinely recognize objects, need to be utilized for benchmarking. To address the concerns mentioned above, we make two contributions. First, we systematically vary the level of local object-part content, global detail and spatial context in images from PASCAL VOC 2010 to create a new benchmarking dataset dubbed PPSS-12. Second, we propose an object-part based benchmarking procedure which quantifies classifiers' robustness to a range of visibility and contextual settings. The benchmarking procedure relies on a semantic similarity measure that naturally addresses potential semantic granularity differences between the category labels in training and test datasets, thus eliminating manual mapping. We use our procedure on the PPSS-12 dataset to benchmark top-performing classifiers trained on the ILSVRC-2012 dataset. Our results show that the proposed benchmarking procedure enables additional differentiation among state-of-the-art object classifiers in terms of their ability to handle missing content and insufficient object detail. Given this capability for additional differentiation, our approach can potentially supplement existing benchmarking procedures used in object recognition challenge leaderboards. (Extended version of our ACCV-2016 paper.)
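    A semantic similarity measure of the kind described can be illustrated with a Wu-Palmer-style score on a tiny hand-built taxonomy: labels at different granularities ("dog" in training vs "canine" or "animal" in test) get partial credit proportional to how deep their lowest common subsumer sits. The taxonomy and the specific measure below are illustrative assumptions; the paper's actual measure and label hierarchy are not reproduced here.

    ```python
    # toy taxonomy: child -> parent, rooted at "animal"
    parents = {
        "dog": "canine", "wolf": "canine",
        "canine": "mammal", "cat": "mammal",
        "mammal": "animal", "bird": "animal", "animal": None,
    }

    def ancestors(node):
        """Chain from node up to the root, inclusive."""
        chain = []
        while node is not None:
            chain.append(node)
            node = parents[node]
        return chain

    def wup_similarity(a, b):
        """Wu-Palmer style: 2 * depth(lcs) / (depth(a) + depth(b)),
        with depth measured from the root ('animal' has depth 0)."""
        pa, pb = ancestors(a), ancestors(b)
        lcs = next(n for n in pa if n in pb)       # lowest common subsumer
        depth = lambda n: len(ancestors(n)) - 1
        return 2 * depth(lcs) / (depth(a) + depth(b))

    print(wup_similarity("dog", "wolf"))  # lcs is "canine": 2*2/(3+3) ≈ 0.67
    print(wup_similarity("dog", "bird"))  # lcs is the root: 2*0/(3+1) = 0.0
    ```

    Scoring predictions this way removes the need to manually map every training label to a test label, since mismatched granularities are handled by the taxonomy itself.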

    Integrated biorefinery strategy for poly(3-hydroxybutyrate) accumulation in Cupriavidus necator DSM 545 using a sugar rich syrup from cereal waste and acetate from gas fermentation

    Poly(3-hydroxybutyrate) (PHB) is one of the most well-known biodegradable and biocompatible biopolymers produced by prokaryotic microorganisms. It belongs to the family of polyhydroxyalkanoates (PHAs), and it has gained significant attention in recent years due to its potential as a sustainable alternative to traditional petroleum-based plastics. Cupriavidus necator has been identified as a potential producer of PHB for industrial applications due to its ability to produce high amounts of the polymer under controlled conditions, using a wide range of waste substrates. In this study, the ability of the Cupriavidus necator DSM 545 strain to produce PHB was tested in a fed-batch strategy providing two different organic substrates. The first is a sugar-based syrup (SBS) derived from cereal waste. The second is an acetate-rich medium obtained through CO2-H2 fermentation by the acetogenic bacterium Acetobacterium woodii. The carbon sources were tested to improve the accumulation of PHB in the strain. C. necator DSM 545 proved able to grow and accumulate high amounts of biopolymer on waste substrates containing glucose, fructose, and acetate, reaching about 10 g/L of PHB (83% biopolymer content in cell dry mass) in 48 h of fed-batch fermentation at 0.6 L working volume in a bioreactor. Moreover, a Life Cycle Assessment analysis was performed to compare the environmental impact of the process converting the sugar syrup alone with that of the integrated process. It demonstrated that the integrated process is more sustainable and that the most impactful step is PHB production, followed by polymer extraction.